TEDdet: Temporal Feature Exchange and Difference Network for Online Real-Time Action Detection
Authors
Abstract
Localizing and interpreting human actions in videos require understanding the spatial-temporal context of the scenes. Aside from accurate detection, vast real-world sensing scenarios also mandate incremental, instantaneous processing of scenes under restricted computational budgets. However, state-of-the-art detectors fail to meet the above criteria. The main challenge lies in their heavy architectural designs and detection pipelines to reason pertinent spatiotemporal information, such as incorporating 3D Convolutional Neural Networks (CNN) or extracting optical flow. With this insight, we propose a lightweight action tubelet detector coined TEDdet, which unifies complementary feature aggregation and motion modeling modules. Specifically, our Temporal Feature Exchange module induces feature interaction by adaptively aggregating 2D CNN features over successive frames. To address actors' location shift across the sequence, our Temporal Difference module accumulates approximated pair-wise motion among target frames as trajectory cues. These modules can be easily integrated with an existing anchor-free detector to cooperatively model action instances' categories, sizes, and movement for precise tubelet generation. TEDdet exploits larger temporal strides to efficiently infer actions in a coarse-to-fine, online manner. Without relying on 3D CNN or optical flow models, TEDdet demonstrates competitive accuracy at an unprecedented speed (89 FPS) that is more compliant with realistic applications. Codes will be available at https://github.com/alphadadajuju/TEDdet .
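The two modules named in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: a fixed channel shift between neighbouring frames (in the spirit of temporal-shift modules) stands in for the learned Temporal Feature Exchange, and raw pairwise feature differences stand in for the approximated motion cues of the Temporal Difference module; the function names and the `fold` parameter are hypothetical.

```python
import numpy as np

def temporal_exchange(feats, fold=8):
    """Exchange a fraction (1/fold) of channels between neighbouring
    frames so each frame's 2D CNN features see temporal context.
    feats: array of shape (T, C, H, W)."""
    T, C, H, W = feats.shape
    k = C // fold
    out = feats.copy()
    out[1:, :k] = feats[:-1, :k]        # shift first k channels forward in time
    out[:-1, k:2 * k] = feats[1:, k:2 * k]  # shift next k channels backward
    return out

def temporal_difference(feats):
    """Compute pairwise feature differences between successive frames
    as coarse motion (trajectory) cues. Returns (T-1, C, H, W)."""
    return feats[1:] - feats[:-1]
```

The telescoping structure of the differences means they jointly encode how an actor's features drift across the clip, which is what lets an anchor-free head regress per-frame location offsets.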
Similar resources
Feature Point Detection for Real Time Applications
This paper presents a new feature point detector that is accurate, efficient and fast. A detailed qualitative evaluation of the proposed feature point detector for grayscale images is also done. Experiments have proved that this feature point detector is robust to affine transformations, noise and perspective deformations. Moreover, the proposed detector requires only 22 additions per pixel to ...
Improved Spatio-temporal Salient Feature Detection for Action Recognition
Spatio-temporal salient features localize the local motion events and are used to represent video sequences for many computer vision tasks such as action recognition. The robust detection of these features under geometric variations such as affine transformation and view/scale changes is however an open problem. Existing methods use the same filter for both time and space and hence, perform an ...
Feature Extraction Methods for Real-Time Face Detection and Classification
We propose a complete scheme for face detection and recognition. We have used a Bayesian classifier for face detection and a nearest neighbor approach for face classification. To improve the performance of the classifier, a feature extraction algorithm based on a modified nonparametric discriminant analysis has also been implemented. The complete scheme has been tested in a real-time environmen...
Real-time optical subtraction of photographic imagery for difference detection.
Interferometric techniques described in this paper permit real-time optical image subtraction of two input photograph transparencies without the necessity of intermediate processing steps (e.g., holograms or contact-print transparencies). These interferometric techniques allow the use of a white-light source as well as an extended light source, small input-collimator optics, and optical compone...
Towards precise real-time 3D difference detection for industrial applications
3D difference detection is the task to verify whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. We present an approach for 3D difference detection with a hand-held depth camera. In contrast to previous approaches, with the presented approach geometric differences can be detected in real-time and from arbitrary viewpoints. The 3D difference detection accu...
Journal
Journal title: IEEE Access
Year: 2022
ISSN: 2169-3536
DOI: https://doi.org/10.1109/access.2022.3164730